    Measuring Behavior using Motion Capture

    Motion capture systems, using optical, magnetic or mechanical sensors, are now widely used to record human motion. Motion capture provides us with precise measurements of human motion at a very high recording frequency and accuracy, resulting in a massive amount of movement data on several joints of the body or markers of the face. But how do we make sure that we record the right things? And how can we correctly interpret the recorded data? In this multi-disciplinary symposium, speakers from the fields of biomechanics, computer animation, human-computer interaction and behavior science come together to discuss their methods to both record motion and to extract useful properties from the data. In these fields, the construction of human movement models from motion capture data is the focal point, although the application of such models differs per field. Such models can be used to generate and evaluate highly adaptable and believable animation on virtual characters in computer animation, to explore the details of gesture interaction in human-computer interaction applications, to identify patterns related to affective states, or to find biomechanical properties of human movement.

    An Animation Framework for Continuous Interaction with Reactive Virtual Humans

    We present a complete framework for animation of Reactive Virtual Humans that offers a mixed animation paradigm: control of different body parts switches between keyframe animation, procedural animation and physical simulation, depending on the requirements of the moment. This framework implements novel techniques to support real-time continuous interaction. It is demonstrated on our interactive Virtual Conductor.
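    The per-body-part switching between animation paradigms described in this abstract can be sketched as follows. This is a minimal illustration with hypothetical names (`ControlMode`, `BodyPartController`), not the framework's actual API:

    ```python
    from enum import Enum

    class ControlMode(Enum):
        KEYFRAME = "keyframe"      # playback of authored animation
        PROCEDURAL = "procedural"  # parameterized motion generation
        PHYSICAL = "physical"      # physics-based simulation

    class BodyPartController:
        """Tracks which animation paradigm currently drives one body part."""
        def __init__(self, part: str, mode: ControlMode = ControlMode.KEYFRAME):
            self.part = part
            self.mode = mode

        def switch(self, mode: ControlMode) -> None:
            # Switching happens per body part, so e.g. an arm can be
            # physically simulated while the rest stays keyframed.
            self.mode = mode

    # Example: a virtual human whose left arm reacts physically
    # while head and torso follow keyframed animation.
    parts = {p: BodyPartController(p) for p in ("head", "left_arm", "torso")}
    parts["left_arm"].switch(ControlMode.PHYSICAL)
    modes = {p: c.mode.value for p, c in parts.items()}
    ```

    The point of the design is that the switch is local: changing the controller of one body part does not disturb the paradigm driving the others.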

    Elckerlyc in practice - on the integration of a BML Realizer in real applications

    Building a complete virtual human application from scratch is a daunting task, and it makes sense to rely on existing platforms for behavior generation. When building such an interactive application, one needs to be able to adapt and extend the capabilities of the virtual human offered by the platform, without having to make invasive modifications to the platform itself. This paper describes how Elckerlyc, a novel platform for controlling a virtual human, offers these possibilities.
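    The extension pattern the abstract alludes to, adding capabilities without invasive changes to the platform core, can be sketched as a registry of pluggable behavior engines. The names (`Realizer`, `register_engine`) are hypothetical and stand in for whatever extension points the actual platform exposes:

    ```python
    class Realizer:
        """Minimal core that dispatches behaviors to pluggable engines."""
        def __init__(self):
            self._engines = {}

        def register_engine(self, behavior_type: str, engine) -> None:
            # Applications add new capabilities here, without
            # modifying the realizer's own code.
            self._engines[behavior_type] = engine

        def realize(self, behavior_type: str, payload: str) -> str:
            engine = self._engines.get(behavior_type)
            if engine is None:
                raise KeyError(f"no engine registered for {behavior_type!r}")
            return engine(payload)

    # An application plugs in its own speech engine at startup.
    realizer = Realizer()
    realizer.register_engine("speech", lambda text: f"TTS: {text}")
    out = realizer.realize("speech", "Hello")
    ```

    The core stays closed to modification but open to extension: application-specific engines live entirely in application code.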

    Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter

    Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking – and modify its communicative behavior on-the-fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.

    Using Virtual Agents to Guide Attention in Multi-task Scenarios

    Kulms P, Kopp S. Using Virtual Agents to Guide Attention in Multi-task Scenarios. In: Aylett R, Krenn B, Pelachaud C, Shimodaira H, eds. Intelligent Virtual Agents. Lecture Notes in Computer Science. Vol 8108. Berlin, Heidelberg: Springer Berlin Heidelberg; 2013: 295-302.